Hello, and welcome, folks, to this webinar. I'm Vincent Bernard, your host for today. I'm based in Quebec City, and I work at Coveo as a director. Today, we're gonna talk about natural language search. So, what's all the hype? I'm very excited to be here today with our panelists.

Just before getting started, a few housekeeping items. You're all muted right now. If you have any questions, please use the Q&A feature of Zoom; we're gonna take all the questions at the end. Today, the agenda is quite straightforward. We're gonna introduce ourselves, then right after, we're gonna talk about traditional search versus the modern search platform. Then we're gonna talk about the what and why of natural language processing and NLP in general, the misconceptions about NLP, and then we're gonna have a Q&A section at the end.

So just before getting started, I'm very excited, as I said, to be with my co-panelists today. I'll let them introduce themselves, and if you want, you can open your cameras, folks. My first guest today is Hanye from Adobe. Can you introduce yourself, please?

Hi. Thanks, Vincent. As Vincent mentioned, I'm Hanye, and I am with Adobe Research. I've been leading some of the NLP projects at our lab for the past two and a half years with Adobe, and I'm grateful to be here today with you.

Thank you very much. I also have Eric Zimmerman with me today. Eric, mind introducing yourself?

Hey, everybody. My name is Eric Zimmerman. I run the search practice here at Perficient. I've got a team of about fifty folks underneath me who implement search solutions across the board: everything from commerce to website to service to internal search. And I've been involved in, at this point, well over a hundred search implementations, helping bring this technology into organizations and make it useful.

Thank you very much. And last but not least, Kurt Cagle. Can you introduce yourself, please?
Yes. My name is Kurt Cagle. I am the managing editor for Data Science Central. I've been working in the technology space, both with search and search-related technologies, for the better part of thirty years, and I have authored a number of books on the specific area of search technologies and how to implement them. I'm looking forward to the panel.

Thank you very much. So we have quite an overqualified panel, I'd say, to tackle the discussion we have today, but let's just get started. The first area where we want your input is actually search. NLP is a technology, and search is as well; I think they combine very well. But I'd like to understand why we see such a burst of popularity around NLP, and what we'd say distinguishes a traditional search engine from a modern one that carries these technologies. So, Hanye, if you don't mind, what's your opinion on these modern search engines versus traditional ones, and why is NLP in the middle of all of that?

Well, I think we have a little connection issue, so instead I will jump to Eric Zimmerman.

Yeah. So, traditionally, when we look at search engines, if we go back to early web search, think the Excite era, the AltaVista era, search engines were very keyword-based. Right? Specifically, if you type in a word, how frequently does it appear in a result? What total percent of that result does it make up? And if I search for two words, how close are they together? Then over the years came Google, and Ask Jeeves, and other engines that took new approaches, where essentially the job is to bring in additional signals beyond just that keyword-based index. Right?
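Eric's description of that keyword era can be sketched in a few lines. This is a toy illustration, not any real engine's scoring formula: it just combines term frequency (what fraction of the result the query terms make up) with how close together the query terms occur.

```python
# Toy keyword-era ranking: score a document by how frequently the query
# terms appear in it, plus a bonus when those terms occur close together.
# Purely illustrative; real engines use far more refined formulas.

def keyword_score(query, document):
    q_terms = query.lower().split()
    d_terms = document.lower().split()
    if not d_terms:
        return 0.0
    # Term frequency: fraction of the document made up of query terms.
    tf = sum(d_terms.count(t) for t in q_terms) / len(d_terms)
    # Proximity: reward query terms that cluster together in the text.
    positions = [i for i, w in enumerate(d_terms) if w in q_terms]
    if len(positions) >= 2:
        proximity = 1.0 / (max(positions) - min(positions))
    else:
        proximity = 0.0
    return tf + proximity

docs = [
    "fluffy cats are wonderful pets and fluffy cats purr",
    "the history of cats goes back thousands of years",
]
ranked = sorted(docs, key=lambda d: keyword_score("fluffy cats", d), reverse=True)
```

Nothing here understands meaning; a document that says "soft kittens" instead of "fluffy cats" would score zero, which is exactly the limitation the panel discusses next.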
And so those signals are a mix. They're gonna be analytical signals, who searches for what, who clicks on what, how people interact, and they're going to be content understanding signals. That content understanding is really more of this natural language processing side. It's natural language processing, it's knowledge graphs, it's a few other technologies that all come in at times. But at the end of the day, the idea is: how do I help the search platform or search engine better understand the content that's being searched, and how do I help it better understand the query, to know whether the term being searched for is the focus of the sentence or just a descriptor or a modifier to the focus of the sentence?

Thank you very much. Hanye, sorry for the little disturbance here. I was asking what's your take on the whole NLP thing in the modern platform, and what's the difference between that and the classic one? So, your turn.

Absolutely. I apologize for the technical difficulty; of course, it was the right time for that. So, when we talk about traditional search, the first thing that comes to mind is keyword matching. In the old times, the user would have to type a keyword, and the search engine, or whatever search platform they're using, would try to find the exact match for that word. That's not too intelligent, right? The problem here is that, first of all, humans are not really good with keyword search. A lot of times, we don't even know what word we exactly have to search for, because we are not really aware of the content of the document that we are going to search in. And sometimes we even search for the wrong word, even though we kind of know what we want to search for. Let me give you an example.
Let's say I'm trying to search for the section in a document, say a PDF document since I'm from Adobe, that is talking about artificial intelligence. What do I search for? Do I search for "artificial intelligence," or do I search for the keyword "AI"? Or maybe the word "AI" does not even occur in the document I am searching. Maybe it's rather talking about, say, deep learning, which is a subset of AI. Any of you, as a human being, if you read the document and I ask you, "Can you tell me which part of it is talking about AI?", you easily refer me to the section that is talking about deep learning. Because for us human beings, the concept of search is not keyword matching; keyword matching is not really intuitive for us at all. When I search for "AI" in a document, I expect the search engine to understand what I mean and look for the concept that I am searching for. And this is the direction that modern search is going. We are no longer looking at keyword matching, but rather at the semantic meaning of the search query the person expects, and then trying to retrieve the relevant results. For example, today in Adobe Acrobat Reader, we are no longer doing just exact keyword matching; we are doing semantic search. And some search engines even go further, to bridge the gap to searching in a video or a stock of images using NLP. You can type something in Google, for example, like "fluffy cats," and not only will it show you some relevant websites, but it also shows you some videos, possibly on YouTube, of a fluffy cat, or some images of a fluffy cat. So it uses language and NLP to even search in videos and images. That's the direction that today's modern search is going.

I really like that example. I think it speaks a lot.
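Hanye's "AI versus deep learning" example can be illustrated with a tiny sketch of semantic matching. Real semantic search compares vectors produced by learned embedding models; the hand-made three-dimensional vectors below are invented stand-ins, just to show that "AI" and "deep learning" can land close together in a meaning space even though they share no keywords.

```python
# Toy semantic search: rank sections by closeness of meaning vectors
# instead of literal keyword overlap. The vectors here are made up for
# illustration; real systems learn them from large text corpora.
import math

embeddings = {
    "AI":            [0.9, 0.8, 0.1],  # invented "meaning" vectors
    "deep learning": [0.8, 0.9, 0.2],
    "baking bread":  [0.1, 0.0, 0.9],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction (same meaning here).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query, sections):
    q = embeddings[query]
    return max(sections, key=lambda s: cosine(q, embeddings[s]))

best = semantic_search("AI", ["deep learning", "baking bread"])
```

A keyword match for "AI" would find nothing in a section that only says "deep learning"; the vector comparison still surfaces it as the closest concept.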
At Coveo, what we do is actually a kind of hybrid approach between NLP-driven semantic search and, on the other end, something that is based on analytics and behavior. I think there is a middle ground between all these options that is the best solution. Kurt, I know that you are reading a lot and also spending a lot of time in the literature overall for NLP and search. Can you tell us a little bit about that hybrid, or maybe the approach where NLP is a key ingredient for sure, but what's the overall mix of a modern platform?

Typically, when you're talking about NLP, what you're trying to do is basically determine context. The information that you have within a given query is usually very limited. Keyword searches by themselves are specifically intended to look up in an index. There's an index that basically says: okay, look for this word, look for variations of this word, maybe look for synonyms of this word. But overall, that index is still very limited in terms of its overall scope. Once you start talking about NLP, one of the things that you're beginning to do at that point is to move away from that straight indexed approach and move towards something that's a little more conceptual in nature. So when you talk about a fluffy cat, as an example, within the more modern version of NLP you essentially say: okay, let's figure out what we mean by "cat" and look at the permutations there. Let's look at the word "fluffy" and look at the permutations there. And as you begin moving out, you can see commonalities that occur when you have those particular combinations, and it's those commonalities that come back as conceptual results.
So, in essence, you're moving through a graph of information in order to find what the most likely meaning is. One of the other differences that you have with search is that we're increasingly moving into search as an iterative process rather than simply a single access point into the system. So when I talk about search, it's not just my first initial search. It's: okay, I've selected something. That tells the system that this is kind of what I'm looking for, and that changes the context of what I'm looking for in the future as part of the overall process. And I think it's that particular process, where we're increasingly getting the context of the conversation, that is essentially driving the nature of NLP search itself. It's more conversational; it's more driven towards that idea. Probably a good point to stop.

Yeah. Thank you very much. I really like where we're going with it. These classic search engines get boosted with new capabilities such as NLP, and we see that plastered everywhere right now. But what is it actually, in theory? If you can explain it to the audience here that is not familiar with these technologies, I'll go back to you, Hanye. Can we have a quick explanation of these technologies overall, that is, NLP?

Yeah, absolutely. I want to touch on something that is a prerequisite to that. One thing to pay attention to is that with traditional machine learning, as we introduce more data, the algorithm's performance gets better and better, but eventually it reaches a plateau and, fundamentally, the model stops learning. With deep learning, the model's performance keeps getting better and better as we introduce new data to the model. Now, why am I talking about deep learning?
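Kurt's point that search is iterative, with each selection becoming context for the next query, can be sketched roughly like this. The tag-overlap boost, the weight, and the example results are all hypothetical, just to show the shape of the idea: a click re-ranks what comes next.

```python
# Sketch of iterative, context-aware search: results sharing topic tags
# with what the user previously clicked get a boost on the next query.
# The 0.5 boost weight and the tag scheme are invented for illustration.

def rerank(results, clicked_tags):
    def score(result):
        overlap = len(set(result["tags"]) & set(clicked_tags))
        return result["base_score"] + 0.5 * overlap
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Jaguar the car",    "base_score": 1.0, "tags": ["automotive"]},
    {"title": "Jaguar the animal", "base_score": 0.9, "tags": ["wildlife", "cats"]},
]

# Before any click, the car result ranks first on base relevance...
first = rerank(results, clicked_tags=[])
# ...but after the user clicks a wildlife article, the animal wins.
second = rerank(results, clicked_tags=["wildlife"])
```

The same ambiguous query "jaguar" resolves differently once the session has context, which is the conversational quality Kurt describes.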
Because a lot of the progress in NLP is due to the progress in deep learning and the introduction of transformers and large language models. Basically, advances in large language models have revolutionized the world of NLP. Here at Adobe Research, we have started looking into and using a lot of these large language models for our projects. So to understand why NLP is gaining momentum, and why it is that everybody is talking about NLP these days, we want to look into deep learning a little bit. Funny enough, deep learning is not even a new concept; it was actually introduced during the eighties. So why is there this hype about deep learning these days? There are two main reasons. One is that deep learning requires a large amount of data. As an example, a model like GPT-3, which I'm giving as an example because most people have heard about it, is basically trained on a large corpus of data. To give you a scale, it was trained on 45 terabytes of text data, which is practically the whole Internet, pretty much. We didn't have access to this amount of data a while back. The reason we can easily access this data now is digitization. In the old times, if you wanted to, say, train a model to distinguish between cats and dogs, you would have to go collect lots of pictures of cats and lots of pictures of dogs, and they were probably mostly in analog format. Right now, you can just search in Google and get millions of pictures of cats and dogs. Another reason that deep learning is picking up today is that it requires significant computational power. You can imagine a model, again like GPT-3, with 175 billion parameters. You want to train this model on 45 terabytes of data, so you need a tremendous amount of computational power.
Today, with GPUs becoming more and more available, and basically cheaper and more affordable, this has become possible. So a lot of the progress in NLP is because of the progress in deep learning and the appearance of all these large language models that can learn the patterns very well and retrieve the data.

Thanks very much. You were breaking up a little bit, but I think we got everything; it just happened at the end. I'll use that as a bridge to ask Eric Zimmerman. So now we have amazing computers to compute all of this. Would you do it yourself? What's your take on building that kind of technology in a search engine versus using something like Coveo in a composable architecture?

Yeah. So the big picture here is, when you're looking to implement search into your system, you essentially have two major approaches these days. You can take what I call a search engine. This is gonna be a Solr or an Elasticsearch; think basically a wrapper around Apache Lucene, is the long and short of it. And you can build the capabilities on top of that and essentially layer modules on top. So you can plug in natural language processing models; you can plug in natural query understanding models. And there's a variety of open source models and systems out there to do this. But fundamentally, you can essentially assemble all of the Lego blocks to go and build your capability. And when you do that, there's a few things you have to be aware of. One, Elasticsearch and Solr are really built as developer tools. They interact via APIs; they interact via programmers making them work. And so do all these models. So you really have to have developers that understand all of this, plug it together, and get it working. But also, you have to do a lot of validation and testing. Right?
You know, I can't just say: oh, the magic NLP model comes on, it sits on my content, now everything is great. Not only do I have to use it and refine it, adjust parameters and tunings and biasing, I also have to have some analytical system that shows me how well it is working, so that I have data to build a feedback loop from. This ends up being what I'll call a relatively large investment for many companies. This can absolutely be a great way to go if you want to own your search capabilities from the ground up. If you wanna own every piece of that, be able to tune it and tweak it for every little intricacy of your business, that can work well. However, you need to know upfront there's going to be a significant investment in implementation, development, testing, tuning, and ongoing product-level management for the search product you're essentially creating, in order to keep it running well.

The other alternative is to work with a company like Coveo that has built a holistic platform out of the box and makes it available more as a SaaS-type solution. The Coveos of the world have gone and taken all of these technologies and integrated them into a holistic platform. They've done the tuning; they've done the testing. But typically, the other thing they've done is build a nice business-facing UI on top, so that it's not only accessible by developers. So I can have my business user go in and look at the analytics, make tweaks to relevancy, do A/B testing, and get feedback on how well things are working. And so they're getting all of those capabilities, or things you theoretically can build yourself. None of this is complete magic and pixie dust. Right?
But what the platform has done is built it and then iterated over years on improving the relevancy of those models, the accuracy of those models, the capabilities of their platform. And they have an engineering team that is dedicated to doing that. Vince, I believe you're part of that engineering team that's dedicated to continuing to improve the platform. And so, ultimately, the balance is: do I want to take these tools and assemble them like a Lego house, and then maybe try to make that Lego house nicer and nicer over time? Or do I want to work with a company that's done that for me, where, in a way, I outsource my relevancy capabilities to a team that's going to focus just on that, so that I can keep focusing on the rest of my business as I see fit?

Yeah. And I mean, you're right. I'm in the middle of a building where people are dedicated to building search. And if my job was to sell hammers or something like that, I wouldn't bother doing that. I see the complexity of what we're abstracting for our clients, and it's a full job on its own, don't worry about that. Kurt, you've talked about these large transformer models and all the capabilities that we now have access to. What's your take on taking these things like BERT and plugging them into my dataset? Will it work out of the box? Is that magical, or do we actually need to tune and adjust these technologies to the current use case? Is it one-size-fits-all, or do we need to fine-tune everything for every use case?

It's all magical. It basically is. You just sprinkle the magic pixie dust, and you're good as gold. No, no, no. The reality is that search, like everything else, as I talked about earlier, is contextual.
You are basically going to be dealing with information that is organization-specific, content-specific, process-specific, and because of that, the nature of the queries and the expectation of the results of those queries is going to vary from organization to organization and from need to need. As a consequence, even though you're dealing with these large datasets, you still need to essentially build out the model, partially because that model is something that's gonna take a while to fully reach its plateau, but also because the model will change over time as the external requirements change. Because once you have found something, and once enough people have found something, in general they will use that starting point to do other types of operations. So the training process is not something that's done once. You do need to feed the data initially, but then you also take the data and the interactions that you have and use them to refine and change the model over time, to reflect what's going on, and to reflect the changing nature of that search and its relevancy for different audiences over time. Because of that, that refinement learning process is something that, in general, you want to make a key part of any feature going forward. It's not static. It's not something where you can just go in and say: hey, out of the box, we're gonna do it. One thing I like about the Coveo model is that it is essentially a refinement model that does change over time. It's an adaptive model, and because it's an adaptive model, it is going to effectively get better over time rather than simply getting stale.

Thanks. And it's a wonderful bridge to the final section, actually, which is misconceptions.
I really like these, or pitfalls as I call them, because it's usually how you really learn that the way you're thinking might not be adapted to the reality of it. The first guest I wanna tackle that with would probably be Eric Zimmerman. So what is one of the biggest misconceptions, in your opinion, regarding search and NLP that we can discuss?

So, I'll go with one that Kurt kind of alluded to. The biggest misconception that someone implementing this runs into every day is: oh, good, this is magic fairy dust. I throw this tool at it, and it's just gonna work. And more so: I throw this tool at it, and it's gonna give me the exact suggestions I would have made. A couple different things here. Kurt mentioned that everyone's business domain, everybody's knowledge domain, everybody's customer domain is slightly different. Yes, there are broader, I'll call them industry segments, that we can often use to start. But even something as simple as your company having terms different than the next company's, and acronyms different than the next company's, starts to drive complexity here. It starts to drive challenges. The second piece is that I've seen multiple customers now that say: great, we're going to bring in search, but first, we're gonna run NLP over it in order to give us better metadata. We wanna understand better context and understand all of this information. We'll run all of our content through some algorithms, and it'll give us all of the data out. And fundamentally, what happens with that expectation is: yes, they do get metadata out, they do get better metadata out, but it's not necessarily bounded or controlled in any way. Essentially, they're going to see terms or summaries coming out of their content that they may not have meant, or may not want to show to end users.
Right? You know, if I send a document in and one of the terms that comes up is one of my competitors, that's probably not a good thing to allow somebody to filter their search by, or to search by, if I'm looking to sell product on my own website or support my own customers. And so, from that perspective, there are techniques we can use around how we mix NLP with a knowledge graph, so you have kind of a graph of pre-allowed terms: pre-populated, pre-approved taxonomies and ideas and concepts. And you apply those two together. But a lot of times, what people will do is say: I'm gonna put NLP in, that'll be my start. And they spend as much time trying to dial back all the various things it showed about their content that they didn't want as they do actually generating a taxonomy, or generating useful information out of this in the first place.

I really like that. The example you give reminds me of one I saw. Even if your stack is very, very accurate in terms of NLP, and very good at understanding what a word means, so that when people express the same concept a little differently you're still able to find it, you're still missing out on things like seasonality, or things that are happening outside your system. NLP will make your system very, very tight. But remember when COVID started, and that toilet paper thing happened, and people got completely crazy? NLP in your catalog will not reflect that. Words are not changing; your products are not changing. But the customer behavior on your website was completely different.
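Eric's suggestion a moment ago, pairing NLP extraction with a graph of pre-allowed terms, might look roughly like this in miniature. The term extractor here is a naive stand-in for a real NLP model, and the taxonomy, document, and competitor name are all invented for illustration.

```python
# Sketch of bounding NLP output with a pre-approved taxonomy: raw
# extraction can surface terms you never want shown (a competitor's
# name, say), so only terms present in the curated set become facets.

APPROVED_TAXONOMY = {"laptops", "batteries", "warranty", "chargers"}

def extract_terms(text):
    # Stand-in for a real NLP term extractor: just lowercased tokens.
    return {w.strip(".,").lower() for w in text.split()}

def facetable_terms(text):
    # Keep only terms the business has pre-approved for display.
    return extract_terms(text) & APPROVED_TAXONOMY

doc = "Our laptops ship with better batteries than AcmeRival laptops."
facets = facetable_terms(doc)
# The competitor "AcmeRival" is extracted but never surfaced as a facet.
```

The intersection is the whole trick: extraction proposes, the knowledge graph disposes, so end users only ever see vocabulary the business has sanctioned.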
So at this point, that kind of reaction, where you wanna promote some products or even some type-ahead suggestions and just react to what's happening in the environment, is something NLP alone cannot take care of. But both of them are very much needed. And that's what I mean by "it's not a magical solution." I think a modern search engine needs a bunch of different components to react to the modern world, actually. Hanye, on your end, what is your biggest misconception regarding NLP and search?

Yeah. So, the one that Eric mentioned is a really good one. I want to touch on something else as well, which is that a lot of times, these NLP models have become so good that people confuse consciousness with intelligence. Let me give you an example without getting into the technical or philosophical definitions. You might ask a large language model a question like: who was the president of the United States during World War II? And the model might tell you: hey, it was Franklin Roosevelt. And you're like, wow, okay, that's great. Then you go ahead and ask a question such as: how do you feel today? And the model generates something like: I feel very happy today because I got an A on my exam. And that's very human-like, right? At this point, it's completely indistinguishable from a human if you don't know who is sitting on the other side of, let's say, this chatbot. But one thing to keep in mind is that these models have practically seen the entire corpus of the Internet. They have become really good at learning human language, and they can speak it very well. They can understand it, they can retrieve the data, and they can give you the best answer, based on a purely language model. But this is completely different than a human being telling you that they're happy today.
There is something more than just words when a human talks about that. A language model, as I mentioned, has been trained on a large corpus of data. Lots of computational power, so many engineers, so many scientists have been behind this language model to make it work as well as you see today. So this is the intelligence and the hard work of lots of people, rather than the consciousness of the language model. Let's just make sure that we distinguish between these two terms.

It's fun, because I started to work with GPT-3. We were building a demo and needed to generate content, so why not just use this kind of algorithm? At one point, I was very impressed with the output of the model. So I asked: are you alive? Where are you? And the guy, I mean, the algorithm, responded: yeah, I'm from Texas. I'm like, okay, you're bullshitting me. It's so realistic at one point, but it's just not true. It's just words that feel right. Kurt, on your end, what's one of the misconceptions regarding search and NLP that you'd like to share with us?

Actually, there is a guy in Texas that is basically responsible for everything. He's a very, very fast typist.

Yeah. I mean, you've gotta admit, the guy is good.

I think what Hanye alluded to earlier is a fairly critical point. When you talk about language in general, there is a lot of formality that we have. There are structures, albeit structures that tend to vary somewhat from language to language, that are common enough that when you're looking at the kinds of interactions, they're relatively predictable. If I say something, there's essentially a Bayesian tree that says: okay, seventy-five percent of the time, this is going to be the correct response.
Maybe thirty percent of the time, it's gonna be this over here, because maybe something a little different happens, or maybe there's just a little switch in the algorithm that says, according to the path that's taken, this happens to be a better mechanism. In general, when you're talking about machine learning, you have to understand that what is actually happening in the background is that you've created a very, very complex function. But it's kind of like a skier working his way down a slope: the specific paths they take influence what they find, where they find it, and what is available, and that tends to be very sensitive. It's sensitive to initial conditions. It's sensitive to the kind of data that's involved. It's also very expensive when you start talking about these multiple terabytes. There's only so much information that you can gather, and the danger you run into with that kind of information is that after a while, the sources you're dealing with are all essentially drawn from the same origins, because there's only so many places you can get the data from in the first place. So, as a consequence, when you're dealing with NLP, you have to understand that it's always gotta be fed. It's always gotta have some kind of additional information coming in that provides context that changes, going back to the whole issue you talked about with COVID. If I include as part of my NLP process news information feeds that drive some kind of external information awareness into the system, that can change the nature of the NLP so that it does more accurately reflect these changes.
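Kurt's "seventy-five percent of the time" intuition is, at bottom, statistics over which word tends to follow which. A toy bigram model over an invented corpus shows the principle; real language models learn enormously richer functions, but the idea of predicting the likely next response from observed frequencies is the same.

```python
# Toy bigram language model: estimate the probability of the next word
# from counts of which word followed which in a (tiny, invented) corpus.
from collections import Counter, defaultdict

corpus = "the cat sat . the cat ran . the dog sat .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev, nxt):
    # P(next word | previous word) from raw counts.
    counts = follows[prev]
    return counts[nxt] / sum(counts.values())

p = next_word_probability("the", "cat")  # "the" is followed by "cat" 2 of 3 times
```

This also makes Kurt's sensitivity point concrete: the probabilities are nothing but the corpus, so if every source draws from the same origins, the model can only ever echo those origins back.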
So it's got to be something where you have to be aware of the overall ambient information flow before you can get reasonably comfortable in how this information is going to be seen and processed and, from there, presented. I don't know if that's truly a myth per se, but it's one way of thinking about this: yes, you need to be very cognizant of where your information is coming from, and make sure that information actually does reflect the broader environment that the requester is going to be interacting with as well.

No matter what kind of AI or ML you're doing, it's always based on data. So data quality is key here to having a good output overall. Folks, thank you so much for that amazing time. Let's take a few questions. Right now I can see one very interesting question regarding the types of search engines. People get confused because on the market right now, you see semantic search engines, AI-powered search engines, and then keyword-based or classic Elasticsearch and Solr. So maybe, Eric, you're good to go on that one. What would be your recommendation for someone out there looking at all the choices they have and how to make the right one?

Yeah. So I would say, first things first, just being realistic here: some of the various words that are out there, I'd say a good chunk of them, are going to be marketing. You know, AI-powered, semantic, etcetera. They all essentially can mean some of the same things, but they also can mean vastly different things. AI-powered, we've been talking about it here. NLP-powered, let's say an NLP-powered search engine. It could be powered by models that are twenty years old, or it could be powered by GPT-3. Technically, yes, both are accurate.
They are NLP-powered, but one is a lot more modern, a lot more, you know, comprehensive, robust, etcetera, than the other. Right? And so the short answer here is that, fundamentally, those words, in my perspective, are a good indicator of how the company, or the open source group, that is developing this platform is thinking, but they also need to be investigated a little bit. Right? There need to be questions: okay, you say you're using AI in this. Let's talk about how. Right? How is this fundamentally working? What capabilities are being plugged in here? Because there's going to be variance among the various platforms that are out there on the market. You know, AI, to my mind, is a little bit the modern "cloud." If you remember five years ago, everything was cloud-powered. Right? AI these days seems to be thrown onto every product under the sun, even if it's the lowest possible use of AI, just so they can get that checkbox. Right? And so my advice is: be accepting, but be skeptical. Right? Dig in, ask questions, ask the things you heard about here, right, in terms of: okay, are you using AI? Are you using NLP? What sort of NLP is being used? How often is it updating? Where is the data coming from? All of these pieces will let you make a more informed decision by having the owners of that platform help you understand the capabilities a little bit better. I really like that. And to dig a little bit deeper, I'm pasting in the chat right now a demo link. So if anyone out there wants to book a demo with Coveo, we'll be more than happy to open the lid and show you what we do. Shameless plug here. But also, to your point, I think one of the most important parts is to understand the problem you're trying to solve with search. Having a search engine is pretty cool. I mean, I got one. I like it.
But what kind of problem are you trying to solve? Are you a complex manufacturer getting requests from clients to dig into specific SKUs, with partial part numbers that you're trying to resolve? Or are you someone trying to deflect cases and questions on a portal, where you get these long-tail queries that are super complex? So it always depends on the type of use case, but I agree with you: being curious and asking questions is the first and the good way to go. We'll cut it... not short, actually, we're right on time. So I would like to thank you very much, Hanieh, Kurt, Eric. It's been a pleasure to be panelists with you folks today. If you have any questions, click on the link, ask for a demo, and have a good rest of the day. Bye bye, folks. Thanks, everybody.
Unlocking the Power of Natural Language Search
In this webinar, we discuss innovative natural language processing research from Adobe, Perficient, and Coveo. These findings help organizations better predict intent, surface content, and customize digital experiences for everyone.
Key Takeaways:
- Keep up to date with the latest trends in the evolution of search technology.
- Gain crucial insight into the world of natural language processing (to stay ahead of the competition).
- Learn to enhance your search experience with the help of machine learning (it’s easier than you think!).

Vincent Bernard
Director, R&D, Coveo

Eric Immermann
Practice Director, Search and Retrieval, Perficient

Hanieh Deilamsalehy
Machine Learning Researcher, Adobe Research

Kurt Cagle
Managing Editor, Data Science Central

